IBM General Parallel File System

The General Parallel File System (GPFS) is a high-performance clustered file system developed by IBM. It can be deployed in shared-disk or shared-nothing distributed parallel modes. It is used by many of the world's largest commercial companies, as well as some of the supercomputers on the Top 500 List. For example, GPFS was the filesystem of the ASC Purple supercomputer, which was composed of more than 12,000 processors and had 2 petabytes of total disk storage spanning more than 11,000 disks.
In common with typical cluster filesystems, GPFS provides concurrent high-speed file access to applications executing on multiple nodes of a cluster. It can be used with AIX 5L clusters, Linux clusters, Microsoft Windows Server, or a heterogeneous cluster of AIX, Linux, and Windows nodes. In addition to providing filesystem storage capabilities, GPFS provides tools for management and administration of the GPFS cluster and allows for shared access to file systems from remote GPFS clusters.
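One way GPFS supports this kind of concurrency is distributed byte-range locking, which lets multiple nodes write disjoint regions of the same file at once. The following is a toy Python sketch of that idea, not GPFS's actual token manager; the class and node names are illustrative:

```python
class ByteRangeLockManager:
    """Toy model of byte-range locking: nodes may hold write locks on
    disjoint ranges of the same file concurrently, so parallel writers
    to different regions never serialize on a whole-file lock."""

    def __init__(self):
        self.locks = []  # list of (start, end, node); ranges are [start, end)

    def acquire(self, node: str, start: int, end: int) -> bool:
        # Grant the lock only if the requested range overlaps no held range.
        for s, e, _ in self.locks:
            if start < e and s < end:
                return False
        self.locks.append((start, end, node))
        return True

    def release(self, node: str, start: int, end: int) -> None:
        self.locks.remove((start, end, node))


mgr = ByteRangeLockManager()
assert mgr.acquire("node1", 0, 4096)         # node1 locks the first block
assert mgr.acquire("node2", 4096, 8192)      # disjoint range: granted
assert not mgr.acquire("node3", 2048, 6144)  # overlaps both: denied
```

The point of the sketch is the granularity: because conflicts are detected per byte range rather than per file, writers to non-overlapping regions proceed in parallel.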
GPFS has been available on IBM's AIX since 1998, on Linux since 2001, and on Windows Server since 2008, and it is offered as part of the IBM System Cluster 1350. GPFS 3.5 introduced Active File Management to enable asynchronous access and control of local and remote files, allowing for global file collaboration. The most recent version, GPFS 4.1, introduces encryption.
IBM also sells GPFS as IBM Spectrum Scale, a branding for software-defined storage (SDS).
==History==
GPFS began as the Tiger Shark file system, a research project at IBM's Almaden Research Center, as early as 1993. Tiger Shark was initially designed to support high-throughput multimedia applications; this design turned out to be well suited to scientific computing.
Another ancestor of GPFS is IBM's Vesta filesystem, developed as a research project at IBM's Thomas J. Watson Research Center between 1992 and 1995. Vesta introduced the concept of file partitioning to accommodate the needs of parallel applications running on high-performance multicomputers with parallel I/O subsystems. With partitioning, a file is not a sequence of bytes, but rather multiple disjoint sequences that may be accessed in parallel. The partitioning abstracts away the number and type of I/O nodes hosting the filesystem, and it allows a variety of logically partitioned views of files, regardless of the physical distribution of data within the I/O nodes. The disjoint sequences are arranged to correspond to individual processes of a parallel application, allowing for improved scalability.
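The partitioned-file idea can be sketched in a few lines of Python. This is a toy model, not the Vesta API: each of P processes sees its own disjoint, strided subsequence of the file's blocks, and interleaving all the views recovers the whole file.

```python
def partition_view(data: bytes, block_size: int, nprocs: int, rank: int) -> bytes:
    """Toy model of Vesta-style file partitioning: process `rank` of
    `nprocs` sees every nprocs-th block of the file, so the per-process
    views are disjoint and can be read or written in parallel."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    return b"".join(blocks[rank::nprocs])


def merge_views(views: list[bytes], block_size: int) -> bytes:
    """Interleave the views block-by-block to reconstruct the file."""
    out = []
    pos = [0] * len(views)
    while any(p < len(v) for p, v in zip(pos, views)):
        for i, v in enumerate(views):
            out.append(v[pos[i]:pos[i] + block_size])
            pos[i] += block_size
    return b"".join(out)


# Example: a 12-byte "file" split into 2-byte blocks among 3 processes.
data = b"AABBCCDDEEFF"
views = [partition_view(data, 2, 3, r) for r in range(3)]
# views[0] == b"AADD", views[1] == b"BBEE", views[2] == b"CCFF"
assert merge_views(views, 2) == data
```

Because the views never overlap, each process can issue its I/O without coordinating with the others, which is the property Vesta exploited for parallel I/O subsystems.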
Vesta was commercialized as the PIOFS filesystem around 1994, and was succeeded by GPFS around 1998. The main difference between the older and newer filesystems was that GPFS replaced the specialized interface offered by Vesta/PIOFS with the standard Unix API: all the features to support high-performance parallel I/O were hidden from users and implemented under the hood. Today, GPFS is used by many of the top 500 supercomputers listed on the Top 500 Supercomputing Sites web site. Since its inception, GPFS has been successfully deployed for many commercial applications including digital media, grid analytics, and scalable file services.
In 2010, IBM previewed a version of GPFS that included a capability known as GPFS-SNC, where SNC stands for Shared Nothing Cluster. This was officially released with GPFS 3.5 in December 2012, and is now known as GPFS-FPO (File Placement Optimizer). This allows GPFS to use locally attached disks on a cluster of network-connected servers rather than requiring dedicated servers with shared disks (e.g. using a SAN). GPFS-FPO is suitable for workloads with high data locality, such as shared-nothing database clusters like SAP HANA and DB2 DPF, and can be used as an HDFS-compatible filesystem.
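The locality idea behind a shared-nothing layout can be modelled as write affinity: the first replica of each block lands on the node that writes it, so later reads of that data are local, while further replicas are spread across the rest of the cluster for fault tolerance. The sketch below is illustrative only and does not reflect actual GPFS-FPO internals; the node names and replica count are assumptions.

```python
def place_block(block_id: int, writer: str, nodes: list[str], replicas: int = 3) -> list[str]:
    """Toy model of locality-aware ("write affinity") block placement:
    replica 1 stays on the writing node so later reads are local;
    the remaining replicas are spread over the other nodes,
    keyed deterministically on the block id."""
    others = [n for n in nodes if n != writer]
    spread = [others[(block_id + i) % len(others)] for i in range(replicas - 1)]
    return [writer] + spread


nodes = ["node1", "node2", "node3", "node4"]
placement = place_block(0, "node2", nodes)
# placement[0] == "node2": the writer always holds a local copy.
```

HDFS makes the same kind of placement decision for its first replica, which is one reason a locality-aware GPFS cluster can stand in for HDFS under analytics workloads.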

Excerpt source: Wikipedia, the free encyclopedia, "IBM General Parallel File System".